Introduction
The framework of Markovian (or stochastic) games has, in recent years, become a popular one for the analysis of strategic interaction in a dynamic economic context. In a precise sense, this framework is obtained by merging two of the most widely used paradigms of modern economic analysis: stationary dynamic programming problems (Blackwell 1965, Maitra 1967) and repeated games (Fudenberg and Maskin 1986, Abreu 1988). As in stationary dynamic programming problems, Markovian games posit the existence of a “state” variable that is designed to capture the environment of the game at each point in time, but that moves through time in response to the actions taken in the game. However, as in the repeated games framework, Markovian games permit the existence of multiple decision makers (“players”) in the model.
This merger permits a rich variety of possibilities in a Markovian game; for instance, a player's current actions could affect his future reward prospects in two ways:
through the effect they have on the physical environment in which future decisions must be made;
through their impact on the behavior of other agents in the model.
Of course, stationary dynamic programming problems permit only the first effect, since they are one-person problems; repeated games permit only the second, since the stage game itself must be unchanging.
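The structure just described can be sketched in a few lines of code. The following is a minimal illustrative example, not drawn from any particular model in the literature: the states, actions, transition rule, and payoffs (`STATES`, `transition`, `rewards`, and so on) are all hypothetical placeholders, chosen only to exhibit the two channels above — joint actions move the state, and each player's stage payoff depends on both the state and the other player's action.

```python
# A minimal two-player Markovian (stochastic) game sketch.
# All names and payoffs here are illustrative assumptions,
# not taken from any particular model.

STATES = ["low", "high"]
ACTIONS = ["cooperate", "defect"]

def transition(state, a1, a2):
    """The state moves in response to the joint action: mutual
    cooperation pushes the environment to 'high', any defection
    to 'low' (the first channel above)."""
    return "high" if (a1, a2) == ("cooperate", "cooperate") else "low"

def rewards(state, a1, a2):
    """Stage payoffs depend on both the current state and the joint
    action -- unlike a repeated game (state-independent stage game)
    or a one-person dynamic program (single decision maker)."""
    base = 2 if state == "high" else 1
    bonus1 = 1 if a2 == "cooperate" else 0  # the second channel:
    bonus2 = 1 if a1 == "cooperate" else 0  # the rival's behavior matters
    return base + bonus1, base + bonus2

def play(strategy1, strategy2, state, horizon):
    """Simulate the game: each player's (Markov) strategy maps the
    current state to an action; payoffs accumulate over time."""
    total1 = total2 = 0
    for _ in range(horizon):
        a1, a2 = strategy1(state), strategy2(state)
        r1, r2 = rewards(state, a1, a2)
        total1, total2 = total1 + r1, total2 + r2
        state = transition(state, a1, a2)
    return total1, total2

always_cooperate = lambda s: "cooperate"
always_defect = lambda s: "defect"
```

Under these placeholder payoffs, `play(always_cooperate, always_cooperate, "low", 3)` yields higher totals than `play(always_defect, always_defect, "low", 3)`, because cooperation both raises the rival's stage payoff and steers the state toward the more rewarding environment.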
As a consequence of these features, the Markovian game framework is ideally suited for the analysis of economic models in which it is necessary to allow for both a changing decision-making environment and imperfect competition.